VQ-VAE


The vector-quantized variational autoencoder (VQ-VAE) is a generative model that uses vector quantization to learn discrete latent representations: an encoder maps inputs to continuous vectors, each of which is snapped to its nearest entry in a learned codebook before decoding.
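The core quantization step can be illustrated with a minimal NumPy sketch. This is not any paper's implementation; the codebook here is random rather than learned, and the function name `quantize` is illustrative. It shows only the nearest-neighbor lookup that turns continuous encoder outputs into discrete codes.

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e:      (N, D) continuous encoder outputs
    codebook: (K, D) embedding vectors (learned in a real VQ-VAE)
    Returns (indices, z_q): discrete codes and quantized vectors.
    """
    # Squared Euclidean distance from every z_e row to every codebook row
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete latent codes, shape (N,)
    z_q = codebook[indices]          # quantized representation, shape (N, D)
    return indices, z_q

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes of dimension D=4 (toy values)
z_e = rng.normal(size=(5, 4))        # 5 encoder outputs
indices, z_q = quantize(z_e, codebook)
print(indices.shape, z_q.shape)      # (5,) (5, 4)
```

In training, the argmin is non-differentiable, so gradients are typically passed through with a straight-through estimator, and codebook/commitment losses pull the embeddings and encoder outputs toward each other.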

DisQ-HNet: A Disentangled Quantized Half-UNet for Interpretable Multimodal Image Synthesis — Applications to Tau-PET Synthesis from T1 and FLAIR MRI

Feb 26, 2026

Autoregressive Visual Decoding from EEG Signals

Feb 26, 2026

Humanizing Robot Gaze Shifts: A Framework for Natural Gaze Shifts in Humanoid Robots

Feb 25, 2026

SOM-VQ: Topology-Aware Tokenization for Interactive Generative Models

Feb 24, 2026

SceMoS: Scene-Aware 3D Human Motion Synthesis by Planning with Geometry-Grounded Tokens

Feb 24, 2026

VP-VAE: Rethinking Vector Quantization via Adaptive Vector Perturbation

Feb 19, 2026

VAR-3D: View-aware Auto-Regressive Model for Text-to-3D Generation via a 3D Tokenizer

Feb 14, 2026

EXCODER: EXplainable Classification Of DiscretE time series Representations

Feb 13, 2026

TLC-Plan: A Two-Level Codebook Based Network for End-to-End Vector Floorplan Generation

Feb 06, 2026

Vector Quantized Latent Concepts: A Scalable Alternative to Clustering-Based Concept Discovery

Feb 02, 2026